from utils import CDNOW, Stan, StanQuap
from scipy.optimize import minimize
import arviz as az
import polars as pl
import numpy as np
import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ['svg']

In this notebook we show how to fit a BG/NBD model in Stan. We compare the results with the lifetimes package. The model is presented in the paper: Fader, P. S., Hardie, B. G., & Lee, K. L. (2005). “Counting your customers” the easy way: An alternative to the Pareto/NBD model. Marketing Science, 24(2), 275-284.
Faced with a database containing information on the frequency and timing of transactions for a list of customers, it is natural to try to make forecasts about future purchasing. These projections often range from aggregate sales trajectories (e.g., for the next 52 weeks) to individual-level conditional expectations (i.e., the best guess about a particular customer’s future purchasing, given information about his past behavior). The Pareto/NBD is a robust model used for this purpose. It was introduced in the “Counting Your Customers” framework originally proposed by Schmittlein, Morrison, and Colombo (1987), hereafter SMC. This model describes repeat-buying behavior in settings where customer “dropout” is unobserved: it assumes that customers buy at a steady rate (albeit in a stochastic manner) for a period of time, and then become inactive. More specifically, time to “dropout” is modeled using the Pareto (exponential-gamma mixture) timing model, and repeat-buying behavior while active is modeled using the NBD (Poisson-gamma mixture) counting model.
In this notebook, we implement the beta-geometric/negative binomial distribution (BG/NBD), which represents a slight variation in the behavioral “story” that lies at the heart of SMC’s original work, but it is vastly easier to implement. The model parameters can be obtained quite easily using Python and Stan, with no appreciable loss in the model’s ability to fit or predict customer purchasing patterns. We will introduce the BG/NBD model from first principles, and present the expressions required for making individual-level statements about future buying behavior.
Before developing the BG/NBD model, we briefly review the Pareto/NBD model. Then we outline the assumptions of the BG/NBD model, deriving the key expressions at the individual-level, and for a randomly-chosen individual.
The Pareto/NBD model is based on five assumptions:
While active, the number of transactions made by a customer in a time period of length \(t\) is distributed Poisson with transaction rate \(\lambda\).
\[ x \sim \text{Poisson}(\lambda) \]
Heterogeneity in transaction rates across customers follows a gamma distribution with shape parameter \(r\) and scale parameter \(\alpha\).
\[ \lambda \sim \text{Gamma}(r, \alpha) \]
Each customer has an unobserved “lifetime” of length \(\tau\). The point at which the customer becomes inactive is distributed exponential with dropout rate \(\mu\).
\[ \tau \sim \text{Exponential}(\mu) \]
Heterogeneity in dropout rates across customers follows a gamma distribution with shape parameter \(s\) and scale parameter \(\beta\).
\[ \mu \sim \text{Gamma}(s, \beta) \]
The transaction rate \(\lambda\) and the dropout rate \(\mu\) vary independently across customers.
The Pareto/NBD (and, as we will see shortly, the BG/NBD) requires only two pieces of information about each customer’s past purchasing history: his “recency” (when his last transaction occurred) and “frequency” (how many transactions he made in a specified time period). The notation used to represent this information is \(\left(X=x, t_{x}, T\right)\), where \(x\) is the number of transactions observed in the time period \((0, T]\) and \(t_{x}\left(0<t_{x} \leq T\right)\) is the time of the last transaction. Using these two key summary statistics, SMC derive expressions for a number of managerially relevant quantities.
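To make the summary statistics concrete, here is a small sketch of how \(\left(X=x, t_{x}, T\right)\) is computed from raw purchase times. The `rfm` helper and the toy purchase log are ours, purely for illustration; the actual dataset used below has its own summary routine.

```python
def rfm(times, T):
    """Compute (x, t_x, T) from purchase times measured in weeks since the first purchase."""
    repeats = [t for t in times if t > 0]   # repeat purchases exclude the first one at t=0
    x = len(repeats)                        # frequency: number of repeat transactions
    t_x = max(repeats) if repeats else 0.0  # recency: time of the last transaction
    return x, t_x, T

# Hypothetical customer: purchases at weeks 0, 2.5, 7, and 11, observed for 39 weeks
print(rfm([0.0, 2.5, 7.0, 11.0], 39.0))  # -> (3, 11.0, 39.0)
# A one-time buyer has frequency 0 and recency 0
print(rfm([0.0], 39.0))                  # -> (0, 0.0, 39.0)
```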
The likelihood function associated with the Pareto/NBD model is quite complex, involving numerous evaluations of the Gaussian hypergeometric function, which are very demanding from a computational standpoint. Furthermore, the precision of some numerical procedures used to evaluate this function can vary substantially over the parameter space; this can cause major problems for numerical optimization routines as they search for the maximum of the likelihood function. In contrast, the BG/NBD model can be implemented very quickly and efficiently via MLE and MCMC.
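For reference, a single evaluation of the Gaussian hypergeometric function is cheap with `scipy.special.hyp2f1`; the burden for the Pareto/NBD comes from the many evaluations inside the likelihood, not from any one call. A quick sanity check against a known closed form:

```python
import numpy as np
from scipy.special import hyp2f1

# 2F1(1, 1; 2; z) has the closed form -ln(1 - z) / z
z = 0.5
val = hyp2f1(1, 1, 2, z)
print(val, -np.log(1 - z) / z)  # both ≈ 1.3862943611198906
```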
Most aspects of the BG/NBD model directly mirror those of the Pareto/NBD. The only difference lies in the story being told about how/when customers become inactive. The Pareto timing model assumes that dropout can occur at any point in time, independent of the occurrence of actual purchases. If we assume instead that dropout occurs immediately after a purchase, we can model this process using the beta-geometric (BG) model.
More formally, the BG/NBD model is based on the following five assumptions (the first two of which are identical to the corresponding Pareto/NBD assumptions):
While active, the number of transactions made by a customer follows a Poisson process with transaction rate \(\lambda\). This is equivalent to assuming that the time between transactions is distributed exponential with transaction rate \(\lambda\), i.e.,
\[ f\left(t_{j} \mid t_{j-1} ; \lambda\right)=\lambda e^{-\lambda\left(t_{j}-t_{j-1}\right)}, \quad t_{j}>t_{j-1} \geq 0 \]
\[ \text{"Wait"} \sim \text{Exponential}(\lambda) \]
Heterogeneity in \(\lambda\) follows a gamma distribution with pdf
\[ \begin{equation*} f(\lambda \mid r, \alpha)=\frac{\alpha^{r} \lambda^{r-1} e^{-\lambda \alpha}}{\Gamma(r)}, \quad \lambda>0 \end{equation*} \]
\[ \lambda \sim \text{Gamma}(r,\alpha) \]
After any transaction, a customer becomes inactive with probability \(p\). Therefore the point at which the customer “drops out” is distributed across transactions according to a (shifted) geometric distribution with pmf
\[ P(\text { inactive immediately after } j \text { th transaction })=p(1-p)^{j-1}, \quad j=1,2,3, \ldots \]
\[ \text{"Drop Out"} \sim \text{Bernoulli}(p) \]
Heterogeneity in \(p\) follows a beta distribution with pdf
\[ \begin{equation*} f(p \mid a, b)=\frac{p^{a-1}(1-p)^{b-1}}{B(a, b)}, \quad 0 \leq p \leq 1 \end{equation*} \]
where \(B(a, b)\) is the beta function, which can be expressed in terms of gamma functions: \(B(a, b)=\Gamma(a) \Gamma(b) / \Gamma(a+b)\).
\[ p \sim \text{Beta}(a, b) \]
The transaction rate \(\lambda\) and the dropout probability \(p\) vary independently across customers.
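These five assumptions translate directly into a simulation of a single customer's purchase history: draw exponential waits with rate \(\lambda\) and flip a Bernoulli(\(p\)) dropout coin after each purchase. The helper below is our own sketch, checked against the individual-level expectation \(E[X(t)\mid\lambda,p]=\frac{1}{p}\left(1-e^{-\lambda p t}\right)\) derived later in the text.

```python
import numpy as np

def simulate_customer(lam, p, T, rng):
    """Simulate (x, t_x) for one customer: exponential interpurchase waits
    with rate lam, dropout with probability p after each transaction."""
    t, x, t_x = 0.0, 0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)  # next interpurchase wait
        if t > T:
            break                        # observation period ends first
        x += 1
        t_x = t
        if rng.random() < p:
            break                        # customer becomes inactive
    return x, t_x

rng = np.random.default_rng(42)
counts = [simulate_customer(1.0, 0.2, 10.0, rng)[0] for _ in range(20000)]
expected = (1 - np.exp(-1.0 * 0.2 * 10.0)) / 0.2  # ≈ 4.3233
print(np.mean(counts), expected)
```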
Consider a customer who had \(x\) transactions in the period \((0, T]\) with the transactions occurring at \(t_{1}, t_{2}, \ldots, t_{x}\) :
We derive the individual-level likelihood function as follows. The likelihood of the observed transaction sequence is the product of the exponential interpurchase-time densities, with probability \(1-p\) of remaining active after each of the first \(x-1\) transactions. After the \(x\)th transaction, either the customer dropped out (with probability \(p\)), or remained active (with probability \(1-p\)) but made no purchase in \((t_x, T]\) (with probability \(e^{-\lambda(T-t_x)}\)). Therefore,
\[ \begin{aligned} L\left(\lambda, p \mid t_{1}, t_{2}, \ldots, t_{x}, T\right) & =\lambda e^{-\lambda t_{1}}(1-p) \lambda e^{-\lambda\left(t_{2}-t_{1}\right)} \cdots(1-p) \lambda e^{-\lambda\left(t_{x}-t_{x-1}\right)}\left\{p+(1-p) e^{-\lambda\left(T-t_{x}\right)}\right\} \\ & =p(1-p)^{x-1} \lambda^{x} e^{-\lambda t_{x}}+(1-p)^{x} \lambda^{x} e^{-\lambda T} \end{aligned} \]
As pointed out earlier for the Pareto/NBD, note that information on the timing of the \(x\) transactions is not required; a sufficient summary of the customer’s purchase history is \(\left(X=x, t_{x}, T\right)\).
Similar to SMC, we assume that all customers are active at the beginning of the observation period; therefore the likelihood function for a customer making 0 purchases in the interval \((0, T]\) is the standard exponential survival function:
\[ L(\lambda \mid X=0, T)=e^{-\lambda T} \]
Thus we can write the individual-level likelihood function as
\[ \begin{equation*} L(\lambda, p \mid X=x, T)=(1-p)^{x} \lambda^{x} e^{-\lambda T}+\delta_{x>0} p(1-p)^{x-1} \lambda^{x} e^{-\lambda t_{x}} \end{equation*} \]
where \(\delta_{x>0}=1\) if \(x>0,0\) otherwise.
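A direct transcription of this individual-level likelihood is straightforward (the function name is ours); for \(x=0\) it reduces to the exponential survival function \(e^{-\lambda T}\):

```python
import numpy as np

def indiv_likelihood(lam, p, x, t_x, T):
    """L(lambda, p | X=x, t_x, T) for the BG/NBD individual-level model."""
    alive = (1 - p) ** x * lam ** x * np.exp(-lam * T)  # stayed active through T
    # dropout term only exists when at least one purchase was made
    dead = p * (1 - p) ** (x - 1) * lam ** x * np.exp(-lam * t_x) if x > 0 else 0.0
    return alive + dead

# x = 0 collapses to the survival function exp(-lam * T)
print(indiv_likelihood(0.5, 0.3, 0, 0.0, 4.0), np.exp(-0.5 * 4.0))
```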
Let the random variable \(X(t)\) denote the number of transactions occurring in a time period of length \(t\) (with a time origin of 0). To derive an expression for \(P(X(t)=x)\), we recall the fundamental relationship between inter-event times and the number of events: \(X(t) \geq x \Leftrightarrow T_{x} \leq t\), where \(T_{x}\) is the random variable denoting the time of the \(x\)th transaction. Given our assumption regarding the nature of the dropout process,
\[ \begin{align*} P(X(t)=x)=& P \text{(active after } x \text{th purchase}) \\ &\cdot P\left(T_{x} \leq t \text { and } T_{x+1}>t\right) +\delta_{x>0} \\ &\cdot P(\text { becomes inactive after } x \text { th purchase }) \\ &\cdot P\left(T_{x} \leq t\right) \end{align*} \qquad(1)\]
Given the assumption that the time between transactions is characterized by the exponential distribution, \(P(T_{x} \leq t \text{ and } T_{x+1}>t)\) is simply the Poisson probability that \(X(t)=x\), and \(P(T_{x} \leq t)\) is the Erlang-\(x\) cdf. Therefore
\[ \begin{equation*} P(X(t)=x \mid \lambda, p)=(1-p)^{x} \frac{(\lambda t)^{x} e^{-\lambda t}}{x!}+\delta_{x>0} p(1-p)^{x-1}\cdot\left[1-e^{-\lambda t} \sum_{j=0}^{x-1} \frac{(\lambda t)^{j}}{j!}\right] \end{equation*} \qquad(2)\]
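Equation 2 can be checked numerically: the probabilities must sum to one over \(x\), and the implied mean must match the expectation \(\frac{1}{p}\left(1-e^{-\lambda p t}\right)\) derived just below. A sketch (the helper name is ours):

```python
import numpy as np
from scipy.special import factorial

def p_x_t(x, t, lam, p):
    """P(X(t)=x | lambda, p), i.e. Equation (2)."""
    poisson = (1 - p) ** x * (lam * t) ** x * np.exp(-lam * t) / factorial(x)
    if x == 0:
        return poisson
    # Erlang-x cdf: probability the x-th transaction occurred by t
    erlang_cdf = 1 - np.exp(-lam * t) * sum((lam * t) ** j / factorial(j) for j in range(x))
    return poisson + p * (1 - p) ** (x - 1) * erlang_cdf

lam, p, t = 1.5, 0.3, 5.0
probs = [p_x_t(x, t, lam, p) for x in range(60)]  # 60 terms: the tail is negligible here
print(sum(probs))                               # ≈ 1
print(sum(x * q for x, q in enumerate(probs)))  # ≈ (1 - exp(-lam*p*t)) / p
```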
Given that the number of transactions follows a Poisson process, \(E[X(t)]\) is simply \(\lambda t\) if the customer is active at \(t\). For a customer who becomes inactive at \(\tau \leq t\), the expected number of transactions in the period \((0, \tau]\) is \(\lambda \tau\).
But what is the likelihood that a customer becomes inactive at \(\tau\) ? Conditional on \(\lambda\) and \(p\),
\[ \begin{aligned} P(\tau>t)=P(\text { active at } t \mid \lambda, p) & =\sum_{j=0}^{\infty}(1-p)^{j} \frac{(\lambda t)^{j} e^{-\lambda t}}{j!} \\ & =e^{-\lambda p t} \end{aligned} \]
This implies that the pdf of the dropout time is given by \(g(\tau \mid \lambda, p)=\lambda p e^{-\lambda p \tau}\). (Note that this takes on an exponential form. But it features an explicit association with the transaction rate \(\lambda\), in contrast with the Pareto/NBD, which has an exponential dropout process that is independent of the transaction rate.) It follows that the expected number of transactions in a time period of length \(t\) is given by
\[ \begin{align*} E(X(t) \mid \lambda, p) & =\lambda t \cdot P(\tau>t)+\int_{0}^{t} \lambda \tau g(\tau \mid \lambda, p) d \tau \\ & =\frac{1}{p}-\frac{1}{p} e^{-\lambda p t} \end{align*} \]
All the expressions developed above are conditional on the transaction rate \(\lambda\) and the dropout probability \(p\), both of which are unobserved. To derive the equivalent expressions for a randomly chosen customer, we take the expectation of each individual-level result over the gamma and beta mixing distributions for \(\lambda\) and \(p\) given above. This yields the following results.
\[ \begin{align*} L\left(r, \alpha, a, b \mid X=x, t_{x}, T\right)&=\frac{B(a, b+x)}{B(a, b)} \frac{\Gamma(r+x) \alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ &+\delta_{x>0} \frac{B(a+1, b+x-1)}{B(a, b)} \frac{\Gamma(r+x) \alpha^{r}}{\Gamma(r)\left(\alpha+t_{x}\right)^{r+x}} \end{align*} \]
The four BG/NBD model parameters \((r, \alpha, a, b)\) can be estimated via the method of maximum likelihood in the following manner. Suppose we have a sample of \(N\) customers, where customer \(i\) had \(X_{i}=x_{i}\) transactions in the period \((0, T_{i}]\), with the last transaction occurring at \(t_{x_{i}}\). The sample log-likelihood function is given by
\[ \begin{equation*} L L(r, \alpha, a, b)=\sum_{i=1}^{N} \ln \left[L\left(r, \alpha, a, b \mid X_{i}=x_{i}, t_{x_{i}}, T_{i}\right)\right] \end{equation*} \]
This can be maximized using standard numerical optimization routines.
\[ \begin{align*} P(X(t)= x \mid r, \alpha, a, b) &=\frac{B(a, b+x)}{B(a, b)} \frac{\Gamma(r+x)}{\Gamma(r) x!}\left(\frac{\alpha}{\alpha+t}\right)^{r}\left(\frac{t}{\alpha+t}\right)^{x} \\ &+\delta_{x>0} \frac{B(a+1, b+x-1)}{B(a, b)} \\ &\cdot\left[1-\left(\frac{\alpha}{\alpha+t}\right)^{r}\left\{\sum_{j=0}^{x-1} \frac{\Gamma(r+j)}{\Gamma(r) j!}\left(\frac{t}{\alpha+t}\right)^{j}\right\}\right] \end{align*} \qquad(3)\]
\[ \begin{equation*} E(X(t) \mid r, \alpha, a, b)=\frac{a+b-1}{a-1}\left[1-\left(\frac{\alpha}{\alpha+t}\right)^{r}{ }_{2} F_{1}\left(r, b ; a+b-1 ; \frac{t}{\alpha+t}\right)\right] \end{equation*} \]
where \({ }_{2} F_{1}(\cdot)\) is the Gaussian hypergeometric function. (See the Appendix for details of the derivation.)
Note that this final expression requires a single evaluation of the Gaussian hypergeometric function. But it is important to emphasize that this expectation is only used after the likelihood function has been maximized. A single evaluation of the Gaussian hypergeometric function for a given set of parameters is relatively straightforward, and can be closely approximated with a polynomial series, even in a modeling environment such as Microsoft Excel.
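A sketch of this expectation in Python (the function name is ours), cross-checked by Monte Carlo: draw \(\lambda \sim \text{Gamma}(r, \alpha)\) (rate parameterization, matching the pdf above) and \(p \sim \text{Beta}(a, b)\), then average the individual-level expectation \(\frac{1}{p}\left(1-e^{-\lambda p t}\right)\). The parameter values are round numbers near the estimates obtained below, chosen only for the check.

```python
import numpy as np
from scipy.special import hyp2f1

def expected_transactions(t, r, alpha, a, b):
    """E[X(t)] for a randomly chosen customer (the expression requires a != 1)."""
    term = (alpha / (alpha + t)) ** r * hyp2f1(r, b, a + b - 1, t / (alpha + t))
    return (a + b - 1) / (a - 1) * (1 - term)

r, alpha, a, b, t = 0.25, 4.4, 0.8, 2.4, 39.0
closed = expected_transactions(t, r, alpha, a, b)

rng = np.random.default_rng(0)
lam = rng.gamma(r, 1.0 / alpha, size=200_000)  # rate alpha -> numpy scale 1/alpha
p = rng.beta(a, b, size=200_000)
mc = np.mean((1 - np.exp(-lam * p * t)) / p)   # average of individual-level expectations
print(closed, mc)  # should agree closely
```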
In order for the BG/NBD model to be of use in a forward-looking customer-base analysis, we need to obtain an expression for the expected number of transactions in a future period of length \(t\) for an individual with past observed behavior \(\left(X=x, t_{x}, T\right)\). We provide a careful derivation in the Appendix, but here is the key expression:
\[ \begin{align*} & E\left(Y(t) \mid X=x, t_{x}, T, r, \alpha, a, b\right)= \\ & \qquad \frac{\frac{a+b+x-1}{a-1}\left[1-\left(\frac{\alpha+T}{\alpha+T+t}\right)^{r+x}{ }_{2} F_{1}\left(r+x, b+x ; a+b+x-1 ; \frac{t}{\alpha+T+t}\right)\right]}{1+\delta_{x>0} \frac{a}{b+x-1}\left(\frac{\alpha+T}{\alpha+t_{x}}\right)^{r+x}} \end{align*} \]
Once again, this expectation requires a single evaluation of the Gaussian hypergeometric function for any customer of interest, but this is not a burdensome task. The remainder of the expression is simple arithmetic.
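A transcription of this conditional expectation (the function name is ours): at \(t=0\) it must return 0, and it should grow with the forecast horizon. The parameter and customer values below are illustrative.

```python
import numpy as np
from scipy.special import hyp2f1

def cond_expected_transactions(t, x, t_x, T, r, alpha, a, b):
    """E[Y(t) | X=x, t_x, T]: expected transactions in the next period of length t."""
    hyp = hyp2f1(r + x, b + x, a + b + x - 1, t / (alpha + T + t))
    top = (a + b + x - 1) / (a - 1) * (
        1 - ((alpha + T) / (alpha + T + t)) ** (r + x) * hyp
    )
    # denominator's second term only applies to customers with x > 0
    bottom = 1 + (x > 0) * (a / (b + x - 1)) * ((alpha + T) / (alpha + t_x)) ** (r + x)
    return top / bottom

params = dict(r=0.243, alpha=4.414, a=0.793, b=2.426)
print(cond_expected_transactions(0.0, 2, 30.43, 38.86, **params))   # 0 at t = 0
print(cond_expected_transactions(39.0, 2, 30.43, 38.86, **params))  # positive forecast
```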
Maximum likelihood estimates (MLE) of the model parameters (\(r\), \(\alpha\), \(a\), \(b\)) are obtained by maximizing the log-likelihood function given above. Standard numerical optimization methods are employed, using SciPy's minimize and Stan's optimize method, to obtain the parameter estimates. To implement the model in SciPy, we rewrite the individual-level likelihood as
\[ L(r, \alpha, a, b | X = x, t_x, T) = A_1 \cdot A_2 \cdot (A_3 + \delta_{x > 0} A_4) \]
where
\[ A_1 = \frac{\Gamma(r + x)\alpha^r}{\Gamma(r)} \quad A_2 = \frac{\Gamma(a + b)\Gamma(b + x)}{\Gamma(b)\Gamma(a + b + x)} \]
\[ A_3 = \left( \frac{1}{\alpha + T} \right)^{r + x} \quad A_4 = \left( \frac{a}{b + x - 1} \right) \left( \frac{1}{\alpha + t_x} \right)^{r + x} \]
data = CDNOW(master=False, calib_p=273) # 39 week calibration period
rfm_data = data.rfm_summary().select("P1X", "t_x", "T").collect().to_numpy()
p1x, t_x, T = rfm_data[:, 0], rfm_data[:, 1], rfm_data[:, 2]

Recall from the paper the following definitions:
- p1x is the number of repeat purchases the customer has made, i.e. one less than the total number of purchases. Strictly speaking, it is the count of time periods in which the customer made a purchase; if the time unit is days, it is the number of days on which the customer purchased.
- T is the age of the customer in the chosen time units (weekly, in the above dataset): the duration between the customer's first purchase and the end of the period under study.
- t_x is the age of the customer at their most recent purchase: the duration between the customer's first purchase and their latest purchase. (Thus if they have made only one purchase, the recency is 0.)

The BG/NBD model is a probabilistic model that describes the buying behavior of a customer in a non-contractual setting. It is based on the following assumptions for each customer:
\[ f(t_{j} \mid t_{j-1}; \lambda) = \lambda \exp(-\lambda (t_{j} - t_{j - 1})), \quad t_{j} > t_{j - 1} \geq 0 \]
\[ f(\lambda \mid r, \alpha) = \frac{\alpha^{r}\lambda^{r - 1}\exp(-\lambda \alpha)}{\Gamma(r)}, \quad \lambda > 0 \]
After any transaction, a customer becomes inactive with probability \(p\).
Heterogeneity in \(p\) follows a beta distribution with pdf
\[ f(p \mid a, b) = \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} p^{a - 1}(1 - p)^{b - 1}, \quad 0 \leq p \leq 1 \]
Instead of estimating \(\lambda\) and \(p\) for each specific customer, we do it for a randomly chosen customer, i.e. we work with the expected values of the parameters. Hence, we are interested in finding the posterior distribution of the parameters \(r\), \(\alpha\), \(a\), and \(b\).
Let us implement the standard approach highlighted in the paper and find the parameters \(r, \alpha, a, \text{ and } b\) that maximize the log-likelihood function.
import numpy as np
from scipy.special import gammaln, hyp2f1, gamma, factorial
def bgnbd_ll(x, p1x, t_x, T):
r, alpha, a, b = x
# Logarithm calculations with numerical stability
log_alpha = np.log(np.clip(alpha, 1e-10, None)) # Avoid log(0) by clipping to a small value
log_alpha_t_x = np.log(np.clip(alpha + t_x, 1e-10, None))
# Components of the log-likelihood
D_1 = (
gammaln(r + p1x)
- gammaln(r)
+ gammaln(a + b)
+ gammaln(b + p1x)
- gammaln(b)
- gammaln(a + b + p1x)
)
D_2 = r * log_alpha - (r + p1x) * log_alpha_t_x
    C_3 = ((alpha + t_x) / (alpha + T)) ** (r + p1x)
    C_4 = a / (b + p1x - 1)  # only meaningful when p1x > 0; masked out below otherwise
    # For p1x > 0 take log(C_3 + C_4); for p1x == 0 the dropout term vanishes
    log_term = np.log(np.clip(C_3 + C_4, 1e-10, None))
    result = D_1 + D_2 + np.where(p1x > 0, log_term, np.log(np.clip(C_3, 1e-10, None)))
    return -np.sum(result)
def bgnbd_est():
    guess = {'r': 0.01, 'alpha': 0.01, 'a': 0.01, 'b': 0.01}
    # BFGS is unconstrained; the parameters remain positive here in practice,
    # but L-BFGS-B with positivity bounds could be used for robustness
    return minimize(
        bgnbd_ll,
        x0=list(guess.values()),
        method="BFGS",
        args=(p1x, t_x, T)
    )
result = bgnbd_est()
r, alpha, a, b = result.x
ll = result.fun
print(
f"""r = {r:0.4f}
α = {alpha:0.4f}
a = {a:0.4f}
b = {b:0.4f}
Log-Likelihood = {-ll:0.4f}"""
)
index = ["r", "α", "a", "b"]
scipy_params = result.x
var_mat = result.hess_inv
se = np.sqrt(np.diag(var_mat))
plo = scipy_params - 1.96 * se
phi = scipy_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": scipy_params,
        "SE (Coef)": se,
        "2.5%": plo,
        "97.5%": phi,
    }
)

r = 0.2426
α = 4.4136
a = 0.7929
b = 2.4259
Log-Likelihood = -9582.4292
| Parameter | Coef | SE (Coef) | 2.5% | 97.5% |
|---|---|---|---|---|
| str | f64 | f64 | f64 | f64 |
| "r" | 0.242595 | 0.012417 | 0.218258 | 0.266931 |
| "α" | 4.413603 | 0.476973 | 3.478735 | 5.348471 |
| "a" | 0.792923 | 0.180615 | 0.438918 | 1.146928 |
| "b" | 2.425909 | 0.777435 | 0.902136 | 3.949683 |
lifetimes Implementation

Source: BetaGeoFitter, fit function
import autograd.numpy as np
from autograd.scipy.special import gammaln, beta, gamma
from autograd import value_and_grad, hessian
def negative_log_likelihood(log_params, x, t_x, T):
params = np.exp(log_params)
r, alpha, a, b = params
A_1 = gammaln(r + x) - gammaln(r) + r * np.log(alpha)
A_2 = gammaln(a + b) + gammaln(b + x) - gammaln(b) - gammaln(a + b + x)
A_3 = -(r + x) * np.log(alpha + T)
A_4 = np.log(a) - np.log(b + np.maximum(x, 1) - 1) - (r + x) * np.log(t_x + alpha)
max_A_3_A_4 = np.maximum(A_3, A_4)
ll = (A_1 + A_2 + np.log(np.exp(A_3 - max_A_3_A_4) + np.exp(A_4 - max_A_3_A_4) * (x > 0)) + max_A_3_A_4)
return -ll.sum()
def BetaGeoFitter(guess={'r': 0.1, 'alpha': 0.1, 'a': 0.0, 'b': 0.1}):
    # Note: the objective works on log-parameters, so the starting values
    # correspond to exp(guess) on the natural scale
    return minimize(
        value_and_grad(negative_log_likelihood),
        jac=True,
        method=None,  # defaults to BFGS for an unconstrained problem with gradient
        args=(p1x, t_x, T),
        tol=1e-7,
        x0=list(guess.values()),
        options={'disp': True}
    )
result = BetaGeoFitter()
r, alpha, a, b = np.exp(result.x)
ll = result.fun
print(
f"""r = {r:0.4f}
α = {alpha:0.4f}
a = {a:0.4f}
b = {b:0.4f}
Log-Likelihood = {-ll:0.4f}"""
)
index = ["r", "α", "a", "b"]
lifetimes_params = np.exp(result.x)
hessian_mat = hessian(negative_log_likelihood)(result.x, p1x, t_x, T)
# Delta method on the log scale: only the diagonal of this product is a valid
# variance estimate, which is all we use for the standard errors below
var_mat = (lifetimes_params ** 2) * np.linalg.inv(hessian_mat)
se = np.sqrt(np.diag(var_mat))  # Standard Error
plo = lifetimes_params - 1.96 * se
phi = lifetimes_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": lifetimes_params,
        "SE (Coef)": se,
        "2.5%": plo,
        "97.5%": phi,
    }
)

Optimization terminated successfully.
Current function value: 9582.429207
Iterations: 21
Function evaluations: 26
Gradient evaluations: 26
r = 0.2426
α = 4.4136
a = 0.7929
b = 2.4259
Log-Likelihood = -9582.4292
| Parameter | Coef | SE (Coef) | 2.5% | 97.5% |
|---|---|---|---|---|
| str | f64 | f64 | f64 | f64 |
| "r" | 0.242595 | 0.012557 | 0.217982 | 0.267207 |
| "α" | 4.413602 | 0.378224 | 3.672283 | 5.154921 |
| "a" | 0.792922 | 0.185734 | 0.428884 | 1.15696 |
| "b" | 2.425907 | 0.705414 | 1.043295 | 3.808519 |
We will now set up a Stan model and run both the optimization algorithm and the MCMC sampler.
stan_code = '''
data {
int<lower=0> N; // Number of customers
array[N] int<lower=0> X; // Number of transactions per customer
vector<lower=0>[N] T; // Total observation time per customer
vector<lower=0>[N] t_x; // Time of last transaction (0 if X=0)
}
parameters {
real<lower=0> r; // Gamma shape (r)
real<lower=0> alpha; // Gamma scale (alpha)
real<lower=0, upper=5> a; // Beta shape 1 (a)
real<lower=0, upper=5> b; // Beta shape 2 (b)
}
model {
// Weakly informative priors
r ~ weibull(2, 1);
alpha ~ weibull(2, 10);
a ~ uniform(0, 5);
b ~ uniform(0, 5);
for (n in 1:N) {
int x = X[n];
real tx = t_x[n];
real t = T[n];
// Term 1: B(a, b + x)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + t))^(r + x)
real log_term1 = (
lbeta(a, b + x) - lbeta(a, b) +
lgamma(r + x) - lgamma(r) +
r * log(alpha) - (r + x) * log(alpha + t)
);
// Term 2: B(a + 1, b + x - 1)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + tx))^(r + x)
real log_term2 = negative_infinity();
if (x > 0) {
log_term2 = (
lbeta(a + 1, b + x - 1) - lbeta(a, b) +
lgamma(r + x) - lgamma(r) +
r * log(alpha) - (r + x) * log(alpha + tx)
);
}
target += log_sum_exp(log_term1, log_term2);
}
}
'''

data = {
'N': len(p1x),
'X': p1x.flatten().astype(int).tolist(),
'T': T.flatten().tolist(),
't_x': t_x.flatten().tolist()
}
stan_model = StanQuap(
stan_file='stan_models/bg-nbd',
stan_code=stan_code, data=data,
algorithm='LBFGS', jacobian=False,
tol_rel_grad=1e-7, iter=5000
)
index = ["r", "α", "a", "b"]
params = np.array(list(stan_model.opt_model.stan_variables().values()))
var_mat = stan_model.vcov_matrix() # Variance-Covariance Matrix
se = np.sqrt(np.diag(var_mat)) # Standard Error
plo = params - 1.96 * se
phi = params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": params,
        "SE (Coef)": se,
        "2.5%": plo,
        "97.5%": phi,
    }
)

| Parameter | Coef | SE (Coef) | 2.5% | 97.5% |
|---|---|---|---|---|
| str | f64 | f64 | f64 | f64 |
| "r" | 0.243677 | 0.012593 | 0.218994 | 0.26836 |
| "α" | 4.44674 | 0.379501 | 3.702918 | 5.190562 |
| "a" | 0.792202 | 0.185777 | 0.428078 | 1.156326 |
| "b" | 2.42757 | 0.706726 | 1.042387 | 3.812753 |
x = [
stan_model.opt_model.optimized_params_pd['r'][0],
stan_model.opt_model.optimized_params_pd['alpha'][0],
stan_model.opt_model.optimized_params_pd['a'][0],
stan_model.opt_model.optimized_params_pd['b'][0]
]
print("Log-Likelihood:", bgnbd_ll(x, p1x, t_x, T))
# print(negative_log_likelihood(np.log(np.array(x)), p1x, t_x, T))

Log-Likelihood: 9582.433426176987
stan_code = '''
data {
int<lower=0> N; // Number of customers
array[N] int<lower=0> X; // Number of transactions per customer
vector<lower=0>[N] T; // Total observation time per customer
vector<lower=0>[N] t_x; // Time of last transaction (0 if X=0)
}
parameters {
real<lower=0> r; // Shape parameter for the Gamma prior on purchase rate
real<lower=0> alpha; // Scale parameter for purchase rate
real<lower=0, upper=1> phi_dropout; // Mixture weight for dropout process (Uniform prior)
real<lower=1> kappa_dropout; // Scale parameter for dropout (Pareto prior)
}
transformed parameters {
real a = phi_dropout * kappa_dropout; // Dropout shape parameter (controls early dropout likelihood)
real b = (1 - phi_dropout) * kappa_dropout; // Dropout scale parameter (controls later dropout likelihood)
}
model {
// Priors:
r ~ weibull(2, 1); // Prior on r (purchase rate shape parameter)
alpha ~ weibull(2, 10); // Prior on alpha (purchase rate scale parameter)
phi_dropout ~ uniform(0,1); // Mixture component for dropout process
kappa_dropout ~ pareto(1,1); // Scale of dropout process
for (n in 1:N) {
int x = X[n]; // Number of transactions for customer n
real tx = t_x[n]; // Time of last transaction
real t = T[n]; // Total observation time
if (x == 0) {
// Likelihood for customers with zero transactions:
// Probability of no purchases during (0, T): (alpha/(alpha + t))^r
// Likelihood for X=0: (alpha/(alpha + t))^r
target += r * (log(alpha) - log(alpha + t));
} else {
// Term 1: Probability of surviving until T and making x purchases
// Term 1: B(a, b + x)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + t))^(r + x)
real beta_term1 = lbeta(a, b + x) - lbeta(a, b); // Beta function term
real gamma_term = lgamma(r + x) - lgamma(r); // Gamma function term
real term1 = gamma_term + beta_term1 + r * log(alpha) - (r + x) * log(alpha + t);
// Term 2: Probability of surviving until t_x, then dropping out
// Term 2: B(a + 1, b + x - 1)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + tx))^(r + x)
real beta_term2 = lbeta(a + 1, b + x - 1) - lbeta(a, b);
real term2 = gamma_term + beta_term2 + r * log(alpha) - (r + x) * log(alpha + tx);
// Log-sum-exp for numerical stability
target += log_sum_exp(term1, term2);
}
}
}
'''

data = {
'N': len(p1x),
'X': p1x.flatten().astype(int).tolist(),
'T': T.flatten().tolist(),
't_x': t_x.flatten().tolist()
}
stan_model = StanQuap(
stan_file='stan_models/bg-nbd-1',
stan_code=stan_code, data=data,
algorithm='LBFGS', jacobian=False,
tol_rel_grad=1e-7, iter=5000,
generated_var=['a', 'b']
)
index = ["r", "α", 'phi', 'kappa', "a", "b"]
MAP_params = np.array(list(stan_model.opt_model.stan_variables().values()))
var_mat = stan_model.vcov_matrix() # Variance-Covariance Matrix
se = np.sqrt(np.diag(var_mat)) # Standard Error
# Crude delta-method shortcut: the SEs for a and b below combine the SEs of
# phi and kappa directly rather than the full covariance, so treat them with caution
se = np.concatenate((se, np.array([se[-2] * se[-1], (1 - se[-2]) * se[-1]])))
plo = MAP_params - 1.96 * se
phi = MAP_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": MAP_params,
        "SE (Coef)": se,
        "2.5%": plo,
        "97.5%": phi,
    }
)

| Parameter | Coef | SE (Coef) | 2.5% | 97.5% |
|---|---|---|---|---|
| str | f64 | f64 | f64 | f64 |
| "r" | 0.243721 | 0.0126 | 0.219025 | 0.268417 |
| "α" | 4.4437 | 0.379233 | 3.700404 | 5.186996 |
| "phi" | 0.25224 | 0.019638 | 0.213749 | 0.290731 |
| "kappa" | 2.79762 | 0.717922 | 1.390493 | 4.204747 |
| "a" | 0.705671 | 0.014099 | 0.678038 | 0.733304 |
| "b" | 2.09195 | 0.703823 | 0.712457 | 3.471443 |
x = [
stan_model.opt_model.optimized_params_pd['r'][0],
stan_model.opt_model.optimized_params_pd['alpha'][0],
stan_model.opt_model.optimized_params_pd['a'][0],
stan_model.opt_model.optimized_params_pd['b'][0]
]
print("Log-Likelihood:", bgnbd_ll(x, p1x, t_x, T))
# print(negative_log_likelihood(np.log(np.array(x)), p1x, t_x, T))

Log-Likelihood: 9582.570575930175
mcmc = stan_model.stan_model.sample(data=data)
inf_data = az.from_cmdstanpy(mcmc)
print(mcmc.diagnose())
Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.
Checking sampler transitions for divergences.
No divergent transitions found.
Checking E-BFMI - sampler transitions HMC potential energy.
E-BFMI satisfactory.
Rank-normalized split effective sample size satisfactory for all parameters.
Rank-normalized split R-hat values satisfactory for all parameters.
Processing complete, no problems detected.
mcmc.summary()

| | Mean | MCSE | StdDev | MAD | 5% | 50% | 95% | ESS_bulk | ESS_tail | R_hat |
|---|---|---|---|---|---|---|---|---|---|---|
| lp__ | -9587.760000 | 0.035455 | 1.427950 | 1.223150 | -9590.440000 | -9587.450000 | -9586.090000 | 1747.38 | 2409.83 | 1.00219 |
| r | 0.245463 | 0.000281 | 0.012787 | 0.012457 | 0.224842 | 0.245088 | 0.266881 | 2104.33 | 2152.18 | 1.00358 |
| alpha | 4.502950 | 0.008729 | 0.389696 | 0.387018 | 3.877780 | 4.487010 | 5.161450 | 2005.47 | 2372.62 | 1.00332 |
| phi_dropout | 0.247865 | 0.000392 | 0.019520 | 0.019712 | 0.215384 | 0.247804 | 0.279932 | 2491.12 | 2689.87 | 1.00026 |
| kappa_dropout | 3.230250 | 0.019684 | 0.957071 | 0.833273 | 2.031120 | 3.059880 | 4.967460 | 2586.29 | 2438.72 | 1.00011 |
| a | 0.789834 | 0.003795 | 0.197317 | 0.174234 | 0.531605 | 0.759675 | 1.142930 | 2864.23 | 2670.79 | 1.00014 |
| b | 2.440420 | 0.016023 | 0.768334 | 0.661573 | 1.488450 | 2.302790 | 3.847540 | 2536.46 | 2290.93 | 1.00024 |
axes = az.plot_trace(
data=inf_data,
compact=True,
kind="rank_bars",
backend_kwargs={"figsize": (12, 9), "layout": "constrained"},
)
plt.gcf().suptitle("BG/NBD Model Trace", fontsize=18, fontweight="bold")
plt.tight_layout();

fig, axes = plt.subplots(2, 2, figsize=(12, 9), sharex=False, sharey=False)
axes = axes.flatten()
for i, var_name in enumerate(["r", "alpha", "a", "b"]):
ax = axes[i]
az.plot_posterior(
inf_data.posterior[var_name].values.flatten(),
color="C0",
point_estimate="mean",
ax=ax,
label="MCMC",
)
ax.axvline(x=stan_model.opt_model.stan_variable(var_name), color="C1", linestyle="--", label="Stan MAP")
ax.axvline(x=lifetimes_params[i], color="C2", linestyle="--", label="Lifetimes")
ax.axvline(x=scipy_params[i], color="C3", linestyle="--", label="SciPy")
ax.legend(loc="upper right")
ax.set_title(var_name)
plt.gcf().suptitle("BG/NBD Model Parameters", fontsize=18, fontweight="bold")
plt.tight_layout();

The r and alpha purchase rate parameters are quite similar for all three models, but the a and b dropout parameters are better approximated through the phi_dropout and kappa_dropout reparameterization when fitted with MCMC.
PPCs allow us to check the efficacy of our priors and the performance of the fitted posteriors. For the prior predictive check, we generate random draws of the parameters from the prior distribution. In contrast, for the posterior predictive check, we generate replicated data according to the posterior predictive distribution.
Note that in addition to generating random numbers of the priors or sampling parameters from the posterior, we are computing the predicted number of transactions for each customer during their observation period \(T\). This will be computed in two ways:
In Stan, we will use Equation 1 and Equation 2, given the nature of the dropout process and the time between transactions:
Transaction Process: \[ \text{Time between transactions} \sim \text{Exponential}(\lambda) \]
Dropout Process: \[ \text{Dropout after transaction} \sim \text{Bernoulli}(p) \]
Population Distributions: \[ \lambda \sim \text{Gamma}(r, \alpha), \quad p \sim \text{Beta}(a, b) \]
We will also compute the number of transactions for each customer using Equation 3.
# probability of observing x purchases in a time period of length t
def P_X_t(x, t, r, alpha, a, b):
A = np.cumsum(
gamma(r+x[:-1])/(gamma(r)*factorial(x[:-1])) *
(t/(alpha+t))**x[:-1], axis=0
)
pmf = (
beta(a,b+x)/beta(a,b) *
gamma(r+x)/(gamma(r)*factorial(x)) *
(alpha/(alpha+t))**r *
(t/(alpha+t))**x
)
pmf[1:] += (
beta(a+1,b+x[1:]-1)/beta(a,b) *
(1 - (alpha/(alpha+t))**r * A)
)
return pmf
def bgnbd_predict_trans_dist(x, n_t, T, r, alpha, a, b):
'''
x - range of purchase quantites, shape (x, 1, 1)
n_t - number of customers who have the time of birth t, shape (n_t,)
t - age of the customer with time origin of 0, shape (1, t, 1)
*params - parameters, shape (1,1,params)
return: shape (x, param)
'''
pmf = P_X_t(x, T, r, alpha, a, b)
pmf[-1] = 1 - np.sum(pmf[:-1], axis=0)
    return np.tensordot(pmf, n_t, axes=([1], [0]))

Let's see how the model performs in a prior predictive check, where we sample from the default priors before fitting the model:
stan_prior_code = '''
data {
  int<lower=0> N;
  vector<lower=0>[N] T;
}
generated quantities {
  real r = weibull_rng(2, 1);
  real alpha = weibull_rng(2, 10);
  real phi_dropout = uniform_rng(0, 1);
  real kappa_dropout = pareto_rng(1, 1);
  real a = phi_dropout * kappa_dropout;
  real b = (1 - phi_dropout) * kappa_dropout;
  array[N] int X_rep;
  vector[N] t_x_rep;
  for (n in 1:N) {
    real lambda_n = gamma_rng(r, alpha);
    real p_n = beta_rng(a, b);
    real current_time = 0;
    int x = 0;
    real tx = 0;
    int active = 1;
    while (active && current_time < T[n]) {
      real wait = exponential_rng(lambda_n);
      if (current_time + wait > T[n]) {
        break;
      } else {
        current_time += wait;
        x += 1;
        tx = current_time;
        if (bernoulli_rng(p_n)) {
          active = 0;
        }
      }
    }
    X_rep[n] = x;
    t_x_rep[n] = tx;
  }
}
'''

data = {
    'N': len(p1x),
    'T': T.flatten().tolist(),
}
stan_model = Stan(stan_file='stan_models/bg-nbd-prior', stan_code=stan_prior_code)
prior_fit = stan_model.sample(data=data)
prior_samples = prior_fit.stan_variable('X_rep').astype(int)
# Compute prior frequency distribution, censoring the top bin at max_purch+
max_purch = 9
prior_freq_counts = np.zeros((prior_samples.shape[0], max_purch + 1))
for i in range(prior_samples.shape[0]):
    counts = np.bincount(prior_samples[i], minlength=max_purch + 1)
    counts[max_purch] += counts[max_purch + 1:].sum()
    prior_freq_counts[i] = counts[:max_purch + 1] / data['N']
mean_frequency = prior_freq_counts.mean(axis=0)
hdi = az.hdi(prior_freq_counts, hdi_prob=0.89)
observed_counts = np.bincount(p1x.flatten().astype(int), minlength=max_purch + 1)
observed_counts[max_purch] += observed_counts[max_purch + 1:].sum()
observed_frequency = observed_counts[:max_purch + 1] / data['N']
plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Prior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();
Now, let’s compute the probability of observing \(x\) purchases in a time period of length \(t\) using a closed-form function (Equation 3):
max_purch = 7
params = [prior_fit.stan_variable(param).reshape(1, 1, -1) for param in ['r', 'alpha', 'a', 'b']]
T_unique, n_t = np.unique(T, return_counts=True)
frequency_counts = bgnbd_predict_trans_dist(np.arange(max_purch+1).reshape(-1,1,1), n_t, T_unique.reshape(1,-1,1), *params) / data['N']
mean_frequency = np.nanmean(frequency_counts, axis=1)
hdi = az.hdi(frequency_counts.T, hdi_prob=0.89)
observed_counts = np.bincount(p1x.flatten().astype(int), minlength=max_purch + 1)
observed_counts_cen = observed_counts[:max_purch + 1]
observed_counts_cen[-1] += np.sum(observed_counts[max_purch + 1:])
observed_frequency = observed_counts_cen / data['N']
plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Prior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();
stan_post_code = '''
data {
  int<lower=0> N;
  array[N] int<lower=0> X;
  vector<lower=0>[N] T;
  vector<lower=0>[N] t_x;
}
parameters {
  real<lower=0> r;
  real<lower=0> alpha;
  real<lower=0, upper=1> phi_dropout;
  real<lower=1> kappa_dropout;
}
transformed parameters {
  real a = phi_dropout * kappa_dropout;
  real b = (1 - phi_dropout) * kappa_dropout;
}
model {
  // Priors:
  r ~ weibull(2, 1);
  alpha ~ weibull(2, 10);
  phi_dropout ~ uniform(0, 1);
  kappa_dropout ~ pareto(1, 1);
  // Likelihood (Equation 4), on the log scale:
  for (n in 1:N) {
    int x = X[n];
    real tx = t_x[n];
    real t = T[n];
    if (x == 0) {
      target += r * (log(alpha) - log(alpha + t));
    } else {
      real beta_term1 = lbeta(a, b + x) - lbeta(a, b);
      real gamma_term = lgamma(r + x) - lgamma(r);
      real term1 = gamma_term + beta_term1 + r * log(alpha) - (r + x) * log(alpha + t);
      real beta_term2 = lbeta(a + 1, b + x - 1) - lbeta(a, b);
      real term2 = gamma_term + beta_term2 + r * log(alpha) - (r + x) * log(alpha + tx);
      target += log_sum_exp(term1, term2);
    }
  }
}
generated quantities {
  array[N] int X_rep;
  vector[N] t_x_rep;
  for (n in 1:N) {
    real lambda_n = gamma_rng(r, alpha);
    real p_n = beta_rng(a, b);
    real current_time = 0;
    int x = 0;
    real tx = 0;
    int active = 1;
    while (active && current_time < T[n]) {
      real wait = exponential_rng(lambda_n);
      if (current_time + wait > T[n]) {
        break;
      } else {
        current_time += wait;
        x += 1;
        tx = current_time;
        if (bernoulli_rng(p_n)) {
          active = 0;
        }
      }
    }
    X_rep[n] = x;
    t_x_rep[n] = tx;
  }
}
'''

data = {
    'N': len(p1x),
    'X': p1x.flatten().astype(int).tolist(),
    'T': T.flatten().tolist(),
    't_x': t_x.flatten().tolist()
}
stan_model = Stan(stan_file='stan_models/bg-nbd-post', stan_code=stan_post_code)
post_preds = stan_model.sample(data=data)
posterior = az.from_cmdstanpy(post_preds, posterior_predictive=['X_rep', 't_x_rep'])
posterior_samples = post_preds.stan_variable('X_rep').astype(int)
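Since the notebook already imports `minimize` from scipy.optimize, the same likelihood (Equation 4) can also be maximized directly as a cross-check on the sampler. Below is a hedged sketch that fits synthetic data simulated from known parameters (`bgnbd_nll`, the simulator, and the parameter values are all illustrative, not the CDNOW fit):

```python
import numpy as np
from scipy.special import gammaln, betaln
from scipy.optimize import minimize

def bgnbd_nll(log_params, x, t_x, T):
    """Negative log-likelihood of Equation 4, parameters on the log scale."""
    r, alpha, a, b = np.exp(log_params)
    ll_alive = (betaln(a, b + x) - betaln(a, b)
                + gammaln(r + x) - gammaln(r)
                + r * np.log(alpha) - (r + x) * np.log(alpha + T))
    ll_dead = (betaln(a + 1, b + np.maximum(x, 1) - 1) - betaln(a, b)
               + gammaln(r + x) - gammaln(r)
               + r * np.log(alpha) - (r + x) * np.log(alpha + t_x))
    # the "died at t_x" branch only exists for customers with x > 0
    ll = np.where(x > 0, np.logaddexp(ll_alive, ll_dead), ll_alive)
    return -ll.sum()

# simulate a small synthetic panel from known parameters
rng = np.random.default_rng(1)
true = dict(r=0.6, alpha=5.0, a=1.2, b=2.5)
N, T_obs = 2000, 40.0
lam = rng.gamma(true['r'], 1 / true['alpha'], N)
p = rng.beta(true['a'], true['b'], N)
x = np.zeros(N); t_x = np.zeros(N)
for i in range(N):
    t = 0.0
    while True:
        t += rng.exponential(1 / lam[i])
        if t > T_obs:
            break
        x[i] += 1; t_x[i] = t
        if rng.random() < p[i]:
            break
T_arr = np.full(N, T_obs)
start = np.log([true['r'], true['alpha'], true['a'], true['b']])
fit = minimize(bgnbd_nll, start, args=(x, t_x, T_arr), method='Nelder-Mead')
r_hat, alpha_hat, a_hat, b_hat = np.exp(fit.x)
```

Because the optimizer is started at the true values, its final negative log-likelihood can only improve on the starting point; comparing `np.exp(fit.x)` against `true` gives a rough sense of how well the parameters are identified at this sample size.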
# Compute posterior frequency distribution, censoring the top bin at max_purch+
max_purch = 9
frequency_counts = np.zeros((posterior_samples.shape[0], max_purch + 1))
for i in range(posterior_samples.shape[0]):
    counts = np.bincount(posterior_samples[i], minlength=max_purch + 1)
    counts[max_purch] += counts[max_purch + 1:].sum()
    frequency_counts[i] = counts[:max_purch + 1] / data['N']
# Calculate mean and HDI
mean_frequency = frequency_counts.mean(axis=0)
hdi = az.hdi(frequency_counts, hdi_prob=0.89)
observed_counts = np.bincount(p1x.flatten().astype(int), minlength=max_purch + 1)
observed_counts[max_purch] += observed_counts[max_purch + 1:].sum()
observed_frequency = observed_counts[:max_purch + 1] / data['N']
plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Posterior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();

Now, let’s compute the probability of observing \(x\) purchases in a time period of length \(t\) using a closed-form function (Equation 3):
max_purch = 7
params = [post_preds.stan_variable(param).reshape(1, 1, -1) for param in ['r', 'alpha', 'a', 'b']]
T_unique, n_t = np.unique(T, return_counts=True)
frequency_counts = bgnbd_predict_trans_dist(np.arange(max_purch+1).reshape(-1,1,1), n_t, T_unique.reshape(1,-1,1), *params) / data['N']
mean_frequency = np.mean(frequency_counts, axis=1)
hdi = az.hdi(frequency_counts.T, hdi_prob=0.89)
observed_counts = np.bincount(p1x.flatten().astype(int), minlength=max_purch + 1)
observed_counts_cen = observed_counts[:max_purch + 1]
observed_counts_cen[-1] += np.sum(observed_counts[max_purch + 1:])
observed_frequency = observed_counts_cen / data['N']
plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Posterior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();

Now that we have fitted the model, we can use it to make predictions. For example, we can predict the expected probability of a customer being alive as a function of time.
Let us take a sample of users:
example_customer_ids = [1, 6, 10, 18, 45, 1412]
p1x_sample = p1x[example_customer_ids]
t_x_sample = t_x[example_customer_ids]
T_sample = T[example_customer_ids]
pl.DataFrame({
    'cust_id': example_customer_ids,
    'p1x': p1x_sample.astype(int),
    't_x': t_x_sample.round(2),
    'T': T_sample.round(2)
})

| cust_id | p1x | t_x   | T     |
|---------|-----|-------|-------|
| 1       | 1   | 1.71  | 38.86 |
| 6       | 1   | 5.0   | 38.86 |
| 10      | 5   | 24.43 | 38.86 |
| 18      | 3   | 28.29 | 38.71 |
| 45      | 12  | 34.43 | 38.57 |
| 1412    | 14  | 30.29 | 31.57 |
Observe that the last two customers are frequent buyers compared to the others.
We will use the forward-looking Equation 7 to obtain the expected number of transactions in a future period of length \(t\) for an individual with past observed behavior \(\left(X=x, t_{x}, T\right)\).
from scipy.special import hyp2f1

def expected_purchases(x, t_x, T, t, r, alpha, a, b):
    '''
    E(Y(t) | X = x, t_x, T, r, alpha, a, b)
    Compute the expected number of future purchases over the next *t*
    time periods given frequency *x*, recency *t_x*, and age *T* of each customer.
    '''
    h2f1_cust = hyp2f1(r + x, b + x, a + b + x - 1, t / (alpha + T + t))
    return (
        (a + b + x - 1) / (a - 1) * (1 - ((alpha + T) / (alpha + T + t))**(r + x) * h2f1_cust) /
        (1 + (x > 0) * a / (b + x - 1) * ((alpha + T) / (alpha + t_x))**(r + x))
    )

future_t = np.arange(91)  # Future periods to analyze
params = [post_preds.stan_variable(param).reshape(-1, 1) for param in ['r', 'alpha', 'a', 'b']]
exp_purch = expected_purchases(p1x_sample, t_x_sample, T_sample, future_t.reshape(-1,1,1), *params)  # shape (91, 4000, 6)
fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True, layout="constrained")
for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = exp_purch[:, :, i]  # Shape (91, 4000)
    mean_purchases = customer_data.mean(axis=1)  # Shape (91,)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)  # Shape (2, 91)
    ax.plot(future_t, mean_purchases, color='k', label='Mean')
    ax.fill_between(future_t, pi[0,:], pi[1,:], color='k', alpha=0.2, label='89% PI')
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Expected Purchases')
    if i == 0: ax.legend()
axes[-1].set_xlabel('Future Time Period')
plt.gcf().suptitle("Expected Number of Purchases", fontweight="bold")
plt.show()

Note that the frequent buyers are expected to make more purchases in the future.
We will implement Equation 6 as a Python function:
from scipy.special import expit

def expected_probability_alive(x, t_x, T, r, alpha, a, b):
    '''
    P(alive at T | X = x, t_x, T, r, alpha, a, b)
    Compute the probability that a customer with recency *t_x*,
    frequency *x*, and age *T* is currently active.
    '''
    # log of the Equation 6 denominator term; np.maximum avoids a division
    # by zero at x = 0, where the result is overridden by np.where below
    log_div = (r + x) * np.log((alpha + T) / (alpha + t_x)) + np.log(
        a / (b + np.maximum(x, 1) - 1)
    )
    return np.where(x == 0, 1.0, expit(-log_div))

We now look into the probability of a customer being alive over the next 90 periods:
future_t = np.arange(91)
future_alive = expected_probability_alive(p1x_sample, t_x_sample, (T_sample + future_t.reshape(-1,1,1)), *params)
fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True, layout="constrained")
for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = future_alive[:, :, i]  # Shape (91, 4000)
    mean_palive = customer_data.mean(axis=1)  # Shape (91,)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)  # Shape (2, 91)
    ax.plot(future_t, mean_palive, color='k', label='Mean')
    ax.fill_between(future_t, pi[0,:], pi[1,:], color='k', alpha=0.2, label='89% PI')
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Probability Alive')
    if i == 0: ax.legend()
axes[-1].set_xlabel('Future Time Period')
plt.gcf().suptitle("Expected Probability Alive over Time", fontweight="bold")
plt.show()

Here are some general remarks:
- These plots assume no future purchases.
- The rate of decay differs across customers, as it depends on each customer's purchase history.
- The probability of being alive is always decreasing, since we assume no change in the other parameters.
- These probabilities are always non-negative, as expected.
- For the frequent buyers, the probability of being alive drops very fast, because we assume no future purchases. It is important to keep this in mind when interpreting the results.
We will leverage Equation 8 to compute the probability of observing \(y\) purchases in the interval \((T, T+t]\):
def expected_probability_y_purchase(x, t_x, T, y, t, r, alpha, a, b):
    """
    Compute P(Y(t) = y | X = x, t_x, T, r, alpha, a, b) via Equation 8.
    If broadcasting, y must be a scalar.
    """
    L_term1 = (
        (beta(a, b + x) / beta(a, b)) *
        (gamma(r + x) / gamma(r)) *
        (alpha**r / (alpha + T)**(r + x))
    )
    L_term2 = (
        (beta(a + 1, b + x - 1) / beta(a, b)) *
        (gamma(r + x) / gamma(r)) *
        (alpha**r / (alpha + t_x)**(r + x))
    ) * (x > 0)
    L = L_term1 + L_term2
    A = (
        (beta(a + 1, b + x - 1) / beta(a, b)) *
        (gamma(r + x) / gamma(r)) *
        (alpha**r / (alpha + t_x)**(r + x))
    ) * (x > 0) * (y == 0)
    B = (
        (beta(a, b + x + y) / beta(a, b)) *
        (gamma(r + x + y) / gamma(r)) / gamma(y + 1) *
        (alpha**r * t**y / (alpha + T + t)**(r + x + y))
    )
    C = 0
    if y > 0:
        sum_terms = 0
        for j in range(y):
            term = gamma(r + x + j) / (gamma(r) * gamma(j + 1))
            term *= (alpha**r * t**j) / (alpha + T + t)**(r + x + j)
            sum_terms += term
        C = (
            (beta(a + 1, b + x + y - 1) / beta(a, b)) *
            ((gamma(r + x) / gamma(r)) * (alpha**r / (alpha + T)**(r + x)) - sum_terms)
        )
    return (A + B + C) / L

We now look into the probability of a customer making 0 purchases within the next \(t\) periods, for \(t\) between 0 and 30.
future_t = np.arange(31)  # Future periods to analyze
expected_probability_zero_purchases = expected_probability_y_purchase(
    p1x_sample, t_x_sample, T_sample, 0, future_t.reshape(-1,1,1), *params
)  # Shape (time interval, draws, customer)
fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True, layout="constrained")
for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = expected_probability_zero_purchases[:, :, i]  # Shape (31, draws)
    mean = customer_data.mean(axis=1)  # Shape (31,)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)  # Shape (2, 31)
    ax.plot(future_t, mean, color='k', label='Mean')
    ax.fill_between(future_t, pi[0,:], pi[1,:], color='k', alpha=0.2, label='89% PI')
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Probability')
    if i == 0: ax.legend()
axes[-1].set_xlabel('Future Time Period')
plt.gcf().suptitle("Expected Probability of Zero Purchases between $(T, T+t]$", fontweight="bold")
plt.show()

Let us look into the probability of a customer making \(0, 1, 2, 3, 4, \text{ or } 5\) purchases between now and 30 periods from now.
num_purchases = np.arange(6)  # Number of purchases
expected_probability_purchases = np.zeros((len(num_purchases), len(params[0]), len(p1x_sample)))
for y in num_purchases:  # Shape (number of purchases, draws, customer)
    expected_probability_purchases[y, :, :] = expected_probability_y_purchase(
        p1x_sample, t_x_sample, T_sample, y, 30, *params
    )
fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True, layout="constrained")
for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = expected_probability_purchases[:, :, i]
    mean = customer_data.mean(axis=1)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)
    ax.bar(num_purchases, mean, color='white', edgecolor='black', yerr=[mean - pi[0,:], pi[1,:] - mean])
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Probability')
axes[-1].set_xlabel('Number of Purchases')
plt.gcf().suptitle("Expected Probability of $y$ Purchases within 30 days from $T$", fontweight="bold")
plt.show()

In this appendix, we list all the managerially relevant quantities and their expressions.
The likelihood function for a randomly chosen customer with purchase history \((X=x,t_{x},T)\) is given by
\[ \begin{align*} L(r,\alpha,a,b\,|\,X =x,t_{x},T) &= \frac{B(a,b+x)}{B(a,b)}\frac{\Gamma(r+x)\alpha^{r}}{ \Gamma(r)(\alpha+T)^{r+x}} \\ &+ \delta_{x>0}\,\frac{B(a+1,b+x-1)}{B(a,b)}\frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+t_{x})^{r+x}} \end{align*} \qquad(4)\]
For a randomly chosen customer:
\[ \begin{align*} P(X(t)=x \mid r,\alpha,a,b) &= \frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r+x)}{\Gamma(r)x!}\left(\frac{\alpha}{\alpha+t}\right)^{r}\left(\frac{t}{\alpha+t}\right)^{x} \\ &+ \delta_{x>0}\,\frac{B(a+1,b+x-1)}{B(a,b)} \\ & \times \left[1-\left(\frac{\alpha}{\alpha+t}\right)^{r}\left\{\sum_{j=0}^{x-1}\frac{\Gamma(r+j)}{\Gamma(r)j!}\left(\frac{t}{ \alpha+t}\right)^{j}\right\} \right] \end{align*} \qquad(5)\]
For a randomly chosen customer:
\[ P(\text{alive at }t \mid r,\alpha,a,b) = \left(\frac{\alpha}{\alpha+t}\right)^{r}{}_{2}F_{1}\left(r,b;a+b;\tfrac{t}{\alpha+t}\right) \]
An expression for \(E[X(t)]\) for a randomly-chosen customer:
\[ E(X(t) | r, \alpha, a, b) = \frac{a + b - 1}{a - 1} \left[1 - \left(\frac{\alpha}{\alpha + t}\right)^{r} {}_{2}F_{1}\left(r, b; a + b - 1; \frac{t}{\alpha + t}\right)\right]. \]
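A small numerical check on this expression (with illustrative parameter values): at \(t = 0\) it should equal zero, it should increase with \(t\), and for small \(t\), before any dropout can have occurred, its slope should approach \(E[\lambda] = r/\alpha\):

```python
import numpy as np
from scipy.special import hyp2f1

def expected_X(t, r, alpha, a, b):
    """E[X(t)] for a randomly chosen customer under the BG/NBD model."""
    z = t / (alpha + t)
    return ((a + b - 1) / (a - 1)
            * (1 - (alpha / (alpha + t))**r * hyp2f1(r, b, a + b - 1, z)))

# illustrative parameter values (a > 1 keeps the leading factor positive)
r, alpha, a, b = 0.5, 5.0, 2.0, 3.0
vals = [expected_X(t, r, alpha, a, b) for t in (0.0, 10.0, 20.0, 40.0)]
```

Note that dropout only happens after a purchase, so the first purchase arrives at the full rate \(E[\lambda]\), which is why the initial slope is \(r/\alpha\).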
Below are some expressions that produce the following forward-looking quantities:
\(P(\text{alive}\mid x, t_x, T)\): An expression for the probability that a customer with purchase history \((x, t_x, T)\) is “alive” at time \(T\).
Given the assumptions of the BG/NBD model, the probability that a customer with purchase history \((x, t_x, T)\) is alive at time \(T\) is simply the probability that he did not “die” at \(t_x\).
Recall the BG/NBD likelihood function Equation 4. Noting that the first half of the expression represents the likelihood under the assumption that the customer did not die at \(t_x\) (and is therefore alive at \(T\)), while the second half represents the likelihood under the assumption that the customer died at \(t_x\), the application of Bayes’ theorem gives us
\[ \begin{align*} P(\text{alive} &\mid x,t_x,T,r,\alpha,a,b) \\ &= \frac{\frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}}}{L(r,\alpha,a,b \mid x,t_x,T)} \\ &= \frac{1}{1 + \delta_{x>0} \frac{a}{b+x-1} \left( \frac{\alpha + T}{\alpha + t_x} \right)^{r+x}}. \end{align*} \]
\[ P(\text{alive at }T\,|\,X=x,t_{x},T,r,\alpha,a,b) = 1 \bigg/ \left[1 + \delta_{x>0}\frac{a}{b+x-1}\left(\frac{\alpha + T}{\alpha + t_{x}}\right)^{r+x}\right]. \qquad(6)\]
Note that for \(x > 0\) we can compute the denominator quantity in a log form as:
\[
\text{Denominator} = \log\left(\frac{a}{b + x - 1}\right) + (r + x) \log\left(\frac{\alpha + T}{\alpha + t_x}\right)
\]
Applying the expit function to the negative of this quantity gives us:
\[
\frac{1}{1 + e^{\text{Denominator}}} = \frac{1}{1 + \frac{a}{b+x-1} \left( \frac{\alpha + T}{\alpha + t_x} \right)^{r+x}}
\]
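A quick numerical check (with illustrative parameter values) that this expit form agrees with the literal Equation 6 expression; both helper names are introduced here for illustration:

```python
import numpy as np
from scipy.special import expit

def p_alive_direct(x, t_x, T, r, alpha, a, b):
    """Equation 6, computed literally (valid for x > 0)."""
    return 1.0 / (1.0 + a / (b + x - 1) * ((alpha + T) / (alpha + t_x))**(r + x))

def p_alive_expit(x, t_x, T, r, alpha, a, b):
    """The same quantity via expit of the negated log denominator."""
    log_div = (r + x) * np.log((alpha + T) / (alpha + t_x)) + np.log(a / (b + x - 1))
    return expit(-log_div)

direct = p_alive_direct(3, 20.0, 39.0, 0.25, 4.4, 0.8, 2.4)
stable = p_alive_expit(3, 20.0, 39.0, 0.25, 4.4, 0.8, 2.4)
```

The log-space version avoids overflow when the ratio \((\alpha+T)/(\alpha+t_x)\) is raised to a large power.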
We note that \(P(\text{alive}) = 1\) for a customer who made no purchases in the interval \((0,T]\); this follows from the model’s assumptions that death occurs after a purchase and that customers are alive at the beginning of the observation period. Those interested in the \(P(\text{alive})\) metric may view this as a shortcoming of the model.
There are two ways to overcome this potential problem.
Zero-Inflated Variant:
Assume \(\pi \times 100\%\) of the customer base is “dead” at \(T = 0\). The likelihood function becomes:
\[ \begin{aligned} L(r,\alpha,a,b,\pi \mid x,t_x,T) &= \pi \delta_{x=0} + (1-\pi) \bigg\{ \frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ & \quad + \delta_{x>0} \frac{B(a+1,b+x-1)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+t_x)^{r+x}} \bigg\}. \end{aligned} \]
Then:
\[ \begin{align*} P(\text{alive} \mid x,t_x,T,r,\alpha,a,b,\pi) \\ = \begin{cases} \frac{(1-\pi)}{\pi \left( \frac{\alpha+T}{\alpha} \right)^r + (1-\pi)} & x = 0 \\ \frac{1}{1 + \frac{a}{b+x-1} \left( \frac{\alpha+T}{\alpha+t_x} \right)^{r+x}} & x > 0 \end{cases} \end{align*} \]
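A minimal Python sketch of this zero-inflated \(P(\text{alive})\) (parameter values are illustrative; setting \(\pi = 0\) recovers the standard Equation 6 result):

```python
def p_alive_zero_inflated(x, t_x, T, r, alpha, a, b, pi):
    """P(alive | x, t_x, T) under the zero-inflated BG/NBD variant."""
    if x == 0:
        # a never-buyer may simply have been dead from the start;
        # this is the x = 0 case above, rescaled by (alpha/(alpha+T))^r
        alive_lik = (alpha / (alpha + T))**r
        return (1 - pi) * alive_lik / (pi + (1 - pi) * alive_lik)
    return 1.0 / (1.0 + a / (b + x - 1) * ((alpha + T) / (alpha + t_x))**(r + x))
```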
Flip-at-Time-0 Variant:
Modify the death process assumption:
\[ \begin{align*} L(r,\alpha,a,b \mid x,t_x,T) &= \frac{B(a,b+x+1)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ &+ \frac{B(a+1,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+t_x)^{r+x}}. \end{align*} \]
Then:
\[ P(\text{alive} \mid x,t_x,T,r,\alpha,a,b) = \frac{1}{1 + \frac{a}{b+x} \left( \frac{\alpha+T}{\alpha+t_x} \right)^{r+x}}, \]
which is less than \(1\) even for a customer with \(x = 0\) purchases in the interval \((0,T]\).
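A corresponding sketch for this variant (again with illustrative values; note the \(\delta_{x>0}\) indicator is no longer needed):

```python
def p_alive_flip_at_zero(x, t_x, T, r, alpha, a, b):
    """P(alive | x, t_x, T) when the dropout coin is also flipped at time 0."""
    return 1.0 / (1.0 + a / (b + x) * ((alpha + T) / (alpha + t_x))**(r + x))

# for a never-buyer (x = 0, t_x = 0) this is now strictly below one
p_never_buyer = p_alive_flip_at_zero(0, 0.0, 39.0, 0.25, 4.4, 0.8, 2.4)
```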
Let the random variable \(Y(t)\) denote the number of purchases made in \((T, T + t]\). We are interested in computing the conditional expectation \(E(Y (t) \mid X = x, t_x, T)\), the expected number of purchases in \((T, T + t]\) for a customer with purchase history \((X = x, t_x, T)\).
For \(Y(t)\), the expected number of purchases in \((T,T+t]\),
\[ \begin{align*} & E\left(Y(t) \mid X=x, t_{x}, T, r, \alpha, a, b\right)= \\ & \qquad \frac{\frac{a+b+x-1}{a-1}\left[1-\left(\frac{\alpha+T}{\alpha+T+t}\right)^{r+x}{ }_{2} F_{1}\left(r+x, b+x ; a+b+x-1 ; \frac{t}{\alpha+T+t}\right)\right]}{1+\delta_{x>0} \frac{a}{b+x-1}\left(\frac{\alpha+T}{\alpha+t_{x}}\right)^{r+x}} \end{align*} \qquad(7)\]
Below is an expression for the probability of observing \(y\) purchases in the interval \((T,T+t]\) by a customer with purchase history \((X=x,t_{x},T)\).
\[ P(Y(t)=y \mid X=x,t_{x},T,r,\alpha,a,b)=\frac{\delta_{x>0,y=0}\,\mathsf{A}+ \mathsf{B}+\delta_{y>0}\,\mathsf{C}}{L(r,\alpha,a,b \mid X=x,t_{x},T)} \qquad(8)\]
\[ \mathsf{A}=\frac{B(a+1,b+x-1)}{B(a,b)}\frac{\Gamma(r+x)}{\Gamma(r)}\frac{ \alpha^{r}}{(\alpha+t_{x})^{r+x}}, \]
\[ \mathsf{B} =\frac{B(a,b+x+y)}{B(a,b)}\frac{\Gamma(r+x+y)}{\Gamma(r)y!}\frac{ \alpha^{r}t^{y}}{(\alpha+T+t)^{r+x+y}}, \]
\[ \begin{align*} \mathsf{C} &= \frac{B(a+1,b+x+y-1)}{B(a,b)} \\ &\times\left\{\frac{\Gamma(r+x)}{\Gamma(r)}\frac{\alpha^{r} }{(\alpha+T)^{r+x}}-\sum_{j=0}^{y-1}\frac{\Gamma(r+x+j)}{\Gamma(r)j!}\frac{ \alpha^{r}t^{j}}{(\alpha+T+t)^{r+x+j}}\right\} \end{align*} \]
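As a sanity check on Equation 8, these probabilities should sum to one over \(y\). A scalar, log-scale sketch (parameter values are illustrative):

```python
import numpy as np
from scipy.special import gammaln, betaln

def log_nbd_term(k, r, alpha, x, T, t):
    # log of Gamma(r+x+k)/(Gamma(r) k!) * alpha^r t^k / (alpha+T+t)^(r+x+k)
    return (gammaln(r + x + k) - gammaln(r) - gammaln(k + 1)
            + r * np.log(alpha) + k * np.log(t) - (r + x + k) * np.log(alpha + T + t))

def p_y_purchases(y, x, t_x, T, t, r, alpha, a, b):
    """P(Y(t) = y | x, t_x, T) from Equation 8, computed on the log scale."""
    lb = betaln(a, b)
    L1 = np.exp(betaln(a, b + x) - lb + gammaln(r + x) - gammaln(r)
                + r * np.log(alpha) - (r + x) * np.log(alpha + T))
    L2 = (x > 0) * np.exp(betaln(a + 1, b + max(x, 1) - 1) - lb
                          + gammaln(r + x) - gammaln(r)
                          + r * np.log(alpha) - (r + x) * np.log(alpha + t_x))
    A = L2 if y == 0 else 0.0
    B = np.exp(betaln(a, b + x + y) - lb + log_nbd_term(y, r, alpha, x, T, t))
    C = 0.0
    if y > 0:
        head = np.exp(gammaln(r + x) - gammaln(r)
                      + r * np.log(alpha) - (r + x) * np.log(alpha + T))
        tail = sum(np.exp(log_nbd_term(j, r, alpha, x, T, t)) for j in range(y))
        C = np.exp(betaln(a + 1, b + x + y - 1) - lb) * (head - tail)
    return (A + B + C) / (L1 + L2)

total = sum(p_y_purchases(y, 2, 10.0, 15.0, 10.0, 0.5, 5.0, 2.0, 3.0)
            for y in range(400))
```

The tail of the sum decays geometrically in \(t/(\alpha+T+t)\), so truncating at a few hundred terms is more than enough here.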